FedMDFG: Federated Learning with Multi-Gradient Descent and Fair Guidance


Abstract

Fairness has been considered a critical problem in federated learning (FL). In this work, we analyze two direct causes of unfairness in FL: an unfair direction and an improper step size when updating the model. To solve these issues, we introduce an effective way to measure the fairness of the model through cosine similarity, and then propose a federated multiple-gradient descent algorithm with fair guidance (FedMDFG) to drive the model fairer. We first convert FL into a multi-objective optimization problem (MOP) and design an advanced common descent direction calculation by adding a fair-driven objective to the MOP. A low-communication-cost line search strategy is then designed to find a better step size for the model update. We further present a theoretical analysis of how the method can enhance fairness and guarantee convergence. Finally, extensive experiments in several scenarios verify that FedMDFG is robust and outperforms the SOTA algorithms in convergence and fairness. The source code is available at https://github.com/zibinpan/FedMDFG.
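Two generic building blocks the abstract mentions can be sketched informally: a cosine-similarity fairness measure over the clients' loss vector, and an MGDA-style common descent direction computed from the clients' gradients. The sketch below illustrates only these generic ingredients, not the paper's actual algorithm (which additionally injects a fair-driven objective into the MOP and runs a line search); `fairness_cosine` and `common_descent_direction` are hypothetical helper names.

```python
import numpy as np

def fairness_cosine(losses):
    """Cosine similarity between the clients' loss vector and the
    all-ones vector: 1.0 means perfectly uniform (fair) losses."""
    ones = np.ones_like(losses)
    return float(losses @ ones / (np.linalg.norm(losses) * np.linalg.norm(ones)))

def common_descent_direction(grads, iters=100):
    """Min-norm point in the convex hull of client gradients (the
    classic MGDA subproblem), solved here with plain Frank-Wolfe.
    grads: (n_clients, dim) array of per-client gradients."""
    n = grads.shape[0]
    alpha = np.ones(n) / n                   # start at the uniform weighting
    G = grads @ grads.T                      # Gram matrix of the gradients
    for t in range(iters):
        i = int(np.argmin(G @ alpha))        # vertex minimizing the linearization
        gamma = 2.0 / (t + 2.0)              # standard Frank-Wolfe step size
        e = np.zeros(n)
        e[i] = 1.0
        alpha = (1 - gamma) * alpha + gamma * e
    return alpha @ grads                     # weighted common direction
```

The min-norm element d of the convex hull satisfies ⟨d, g_i⟩ ≥ ‖d‖² for every client i, so a step along −d decreases (to first order) every client's loss simultaneously, which is the multi-objective view the abstract refers to.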


Related articles

Multi-Timescale, Gradient Descent, Temporal Difference Learning with Linear Options

Dealing with large or continuous state spaces has been a long-standing challenge in reinforcement learning. Temporal abstraction has made this somewhat possible, but efficiently planning with temporal abstractions still remains an issue. Moreover, using spatial abstractions to learn policies for various situations at once while using temporal abstraction models is an open problem. We propose ...


Learning to learn by gradient descent by gradient descent

The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorit...


Learning to Learn without Gradient Descent by Gradient Descent

We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-paramete...


Learning ReLUs via Gradient Descent

In this paper we study the problem of learning Rectified Linear Units (ReLUs), which are functions of the form x ↦ max(0, ⟨w,x⟩) with w ∈ ℝ^d denoting the weight vector. We study this problem in the high-dimensional regime where the number of observations is fewer than the dimension of the weight vector. We assume that the weight vector belongs to some closed set (convex or nonconvex) which captu...
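As a toy illustration of the fitting problem only — not the paper's high-dimensional, constrained regime (here the number of observations exceeds the dimension and no constraint set is imposed; all sizes are arbitrary choices) — plain (sub)gradient descent on the squared loss of a planted ReLU looks like:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 200                        # illustrative sizes, not from the paper
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.maximum(0.0, X @ w_true)       # labels generated by a planted ReLU

w = 0.01 * rng.normal(size=d)         # small random init (w = 0 is a dead point)
lr = 0.5 / n

def loss(w):
    return 0.5 * np.sum((np.maximum(0.0, X @ w) - y) ** 2)

loss0 = loss(w)
for _ in range(500):
    pred = np.maximum(0.0, X @ w)
    # subgradient of the squared loss: only samples with active ReLU contribute
    grad = X.T @ ((pred - y) * (X @ w > 0))
    w -= lr * grad
```

Starting from a small random point matters: at w = 0 the ReLU is inactive on every sample, so the subgradient vanishes and descent never starts.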


Learning by Online Gradient Descent

We study online gradient-descent learning in multilayer networks analytically and numerically. The training is based on randomly drawn inputs and their corresponding outputs as defined by a target rule. In the thermodynamic limit we derive deterministic differential equations for the order parameters of the problem which allow an exact calculation of the evolution of the generalization error. Fi...
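A minimal runnable stand-in for this setting uses a single tanh unit trained online on freshly drawn inputs labeled by a fixed target rule — a deliberate simplification of the multilayer networks and order-parameter analysis in the paper; the dimension, learning rate, and step count below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 50
w_teacher = rng.normal(size=d)       # fixed target rule (illustrative)
w = np.zeros(d)                      # student weights
lr = 0.02
errors = []

for t in range(3000):
    x = rng.normal(size=d)           # a fresh input each step, as in online learning
    y = np.tanh(w_teacher @ x)       # output defined by the target rule
    pred = np.tanh(w @ x)
    errors.append(0.5 * (pred - y) ** 2)
    # online gradient step on the instantaneous squared error
    w -= lr * (pred - y) * (1.0 - pred ** 2) * x
```

Because every input is drawn fresh, the recorded per-step error is an unbiased estimate of the generalization error, which is the quantity the paper tracks via its order-parameter equations.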



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i8.26122